perm filename CHAP2[4,KMC]1 blob sn#006467 filedate 1972-10-23 generic text, type T, neo UTF8
00100	CHAPTER 2--SIMULATION MODEL AS EXPLANATION
00200	
00300	
00400	  It is perhaps as difficult to explain scientific explanation as it
00500	is to explain anything else. The explanatory practices of different
00600	sciences differ widely but they all share the purpose of someone 
00700	attempting to answer someone else's why-how-what-etc. questions about
00800	a situation, event, object or phenomenon. Thus explanation implies a 
00900	dialogue whose participants share some interests, beliefs, and values.
01000	A consensus must exist about questions and answers. The participants
01100	must agree on what is a sound and reasonable question and what is a
01200	relevant, appropriate, intelligible, and (believed) correct answer.
01300	The explainer tries to satisfy a questioner's curiosity by making
01400	comprehensible why something is the way it is. The answer may be a
01500	definition, an example, a synonym, a story, a theory, a model-description, etc.
01600	The answer satisfies curiosity by settling belief. Naturally the task of
01700	satisfying the curiosity of a five-year-old boy is different from that
01800	involving a fifty-year-old psychiatrist.
01900	    Suppose a man dies and a questioner (Q) asks an explainer (E):
02000	       Q: Why did the man die?
02100	One answer might be:
02200	       E: Because he took cyanide.
02300	This explanation might be sufficient to satisfy Q's curiosity and he
02400	stops asking further questions. Or he might continue:
02500	       Q: Why did the cyanide kill him?
02600	and E replies:
02700	      E: Anyone who ingests cyanide dies.
02800	The latter explanation appeals to a universal generalization under which
02900	is subsumed the particular fact of this man's death. Subsumptive explanations
03000	satisfy some questioners but not others who, for example, might want to
03100	know about the physiological mechanisms involved.
03200	       Q: How does cyanide work in killing people?
03300	       E: It stops respiration so they actually die from lack of oxygen.
03400	If Q has biochemical interests he might inquire further:
03500	       Q: What is cyanide's mechanism of drug action on the respiratory center?
03600	And so on, since there is no bottom to the questions which might be asked.
03700	Nor is there a top:
03800	       Q: Why did the man take cyanide?
03900	       E: Because he was depressed.
04000	       Q: What was he depressed about?
04100	       E: He lost his job.
04200	       Q: How did that happen?
04300	       E: The aircraft company let go most of their engineers because of the cut-back in defense contracts.
04400	Explanations are always incomplete because the top and bottom can be indefinitely
04410	extended and endless questions can be asked at each level.
04500	Just as the participants in explanatory dialogues
04600	decide what is taken to be problematic, so they also determine the termini of
04700	questions and answers. Each discipline has its characteristic stopping points.
04800	    In explanatory dialogues there exist larger and smaller constellations
04900	of reference which are taken for granted as a nonproblematic background.
05000	Hence in considering paranoid thought processes `it goes without saying'
05100	that a living teleonomic system as the larger constellation strives for
05200	maintenance and expansion of its life using smaller oriented, informed
05300	and constructive subprocesses. Also it goes without saying that at a lower
05400	level ion transport takes place through nerve-cell membranes. Every function
05500	of an organism can be viewed as governing a subfunction beneath and
05600	depending on a transfunction above which calls it into play for a purpose.
05700	   Just as there are many alternative ways of describing, there are many
05800	alternative ways of explaining. An explanation is geared to some level
05900	of what the dialogue participants take to be the fundamental structures
06000	and processes under consideration. Since in psychiatry we cope with
06100	patients' problems using mainly symbolic-conceptual techniques (it is true
06200	that one still has a choice between the pill and the knife, as well as
06300	the spell), we are interested in aspects of human conduct which can be
06400	explained and understood at a symbol-processing level. Hence I shall
06500	attempt to explain paranoid conversational interactions by describing 
06600	in some detail a simulation of paranoid interview behavior, having in
06700	mind an audience of professionals and intelligent amateurs in the fields
06800	of psychiatry, psychology, artificial intelligence, linguistics and philosophy.
07000	   Symbol-processing explanations postulate an underlying intentionalistic
07100	structure of hypothetical mechanisms, goal-directed symbol-processing
07200	procedures, having the power to produce and being responsible for
07300	the manifest phenomena.
07400	Because of uneasiness about the term `mechanism' among some human scientists,
07500	I should make clear that the term is here being used in its broadest
07600	sense to mean procedure, modus operandi, manner of working or functioning, rather than
07700	in the strict classical-mechanics sense of masses, forces and momenta.
07800	An algorithm composed of symbolic computational
07900	procedures converts input symbolic structures into output symbolic
08000	structures according to certain principles. The modus operandi
08100	of a symbolic model is simply the workings of an algorithm when run on
08200	a computer. At this level of explanation, to answer `why?' means to provide             
08300	an algorithm which makes explicit how things go together, how things come about, how things are organized to work.
08400	   To simulate the input-output behavior of a system using symbolic
08500	computational procedures we construct a model which produces I/O
08600	behavior resembling that of the subject system being simulated. The
08700	resemblance is achieved through the workings of an inner postulated
08800	structure in the form of an algorithm, an organization of goal-directed
08900	symbol processing procedures which are responsible for the characteristic
09000	observable behavior at the input-output level. Since we do not know the
09100	structure of the `real' simulative mechanisms used by the mind-brain,
09200	our postulated structure stands as an imaginary theoretical entity,
09300	a possible and plausible organization of procedures analogous to the
09400	unknown mechanisms and serving as an attempt to explain the workings
09500	of the system under study. A simulation model is thus deeper than a
09600	pure black-box explanation because it postulates functionally equivalent
09700	mechanisms inside the box to account for observable patterns of I/O
09800	behavior. A simulation model constitutes an interpretive explanation
09900	in that it makes intelligible the connections between external input,
10000	internal states, and output by postulating intervening mechanisms operating
10100	between symbolic input and symbolic output. An intelligible description
10200	of the model should make clear why and how it reacts as it does under
10300	various circumstances.
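The idea of intervening mechanisms can be made concrete with a deliberately tiny sketch, written here in present-day Python. The sketch is entirely hypothetical and far cruder than the model described later: a single made-up internal variable (`mistrust`) mediates between symbolic input and symbolic output, so that the same kind of input can yield different outputs depending on internal state.

```python
# A toy illustration (not the model itself): output depends jointly on
# the symbolic input and on an internal state which the input also alters.
class ToyModel:
    def __init__(self):
        self.mistrust = 0  # hypothetical internal state variable

    def respond(self, utterance):
        if "police" in utterance.lower():
            self.mistrust += 1  # input alters the internal state
        if self.mistrust > 1:
            return "Why do you keep asking me that?"
        return "I don't care to discuss it."

m = ToyModel()
print(m.respond("Have you seen the police?"))  # prints: I don't care to discuss it.
print(m.respond("What about the police?"))     # prints: Why do you keep asking me that?
```

The second question elicits a different reply than the first only because the intervening state has changed; a pure black-box account, which looks at input-output pairs alone, could not express this.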
10400	    To cite a universal generalization about human behavior is
10500	unsatisfactory to a questioner who is interested in what powers and
10600	capacities are latent behind manifest phenomena. To say `x is nasty
10700	because x is paranoid and all paranoids are nasty' may be relevant,
10800	intelligible and correct but it does not cite a mechanism which accounts
10900	for `nasty' behavior as a consequence of input and internal states of
11000	a system. A model explanation specifies antecedents and mechanisms
11100	through which antecedents generate the phenomena. This approach to
11200	explanation assumes perceptible phenomena display the regularities and
11300	irregularities they do because of the nature of a (currently) imperceptible
11400	underlying structure.
11500	   When attempts are made to explain human behavior, principles in
11600	addition to those accounting for the natural order are invoked. `Nature
11700	entertains no opinions about us' said Nietzsche, but other humans do and
11800	therein lies a source of complexity for human symbol-processing systems.
11900	  Natural sciences such as physics have been guided by the Newtonian ideal
12000	of perfect process knowledge about inanimate objects whose behavior can
12100	be subsumed under lawlike generalizations. When a deviation from a law is
12200	noticed, it is the law which must be modified, not the deviating object.
12300	When the planet Mercury was observed to deviate from the orbit predicted
12400	by Newtonian theory, no one accused the planet of being an intentional agent
12500	breaking the law.  Subsumptive explanation is quite acceptable in physics
12600	but it is seldom satisfactory in accounting for the behavior
12700	of living intentionalistic systems. In considering the behavior of falling bodies
12800	no one nowadays follows the Aristotelian pattern of attributing an intention
12900	to fall to the object in question. But in the case of living systems, especially
13000	ourselves, our ideal explanatory practice remains Aristotelian in utilizing
13100	a concept of intention. (Aristotle was not wrong about everything.)
13200	   Consider a man participating in a high-diving contest. In falling towards
13300	the water he accelerates at the rate of 32 feet per second per second. Viewing
13400	the man simply as a falling body, we explain his rate of fall by appealing to a physical
13500	law. Viewing the man as a human agent, we explain his dive as the result
13600	of an intention to dive in a certain way in order to win the diving contest.
13700	His action (in contrast to mere movement) involves an intended following
13800	of certain conventional rules for what is judged by humans to constitute
13900	a swan dive. Suppose part way down he chooses to change his position in
14000	mid-air and enter the water thumbing his nose at the judges. Here he breaks
14100	the rules for diving and elects to perform an action he considers
14200	disrespectful. To explain the actions of diving and nose-thumbing, we
14300	would appeal, not to laws of natural order, but to an additional order,
14400	principles of human order, superimposed on laws of natural order, which
14500	take into account (1) standards of appropriate action in certain situations
14600	and (2) the agent's inner considerations of intention, belief and value about
14700	those situations which he finds compelling from his point of view.
14800	   In this type of explanation the explanandum, that which is being explained,
14900	is the agent's informed actions, not simply his movements. When a human
15000	agent performs an action in a situation, we can ask: (1) is the action
15100	appropriate to that situation? and (2) if not, why did the agent believe his
15200	action to be called for?
15300	   As will be shown, symbol-processing explanations rely on concepts 
15400	of action, intention, belief, affect, preference, etc. These terms are
15500	close to the terms of everyday language, as is characteristic of the early
15600	stages of an explanatory science. Also they are suitable for
15700	describing intentionalistic algorithms in which final causes guide efficient causes. In
15800	an algorithm these everyday terms can be explicitly defined and
15900	represented.
16000	   Psychiatry deals with the practical concerns of inappropriate action,
16100	belief, etc. on the part of a patient. His behavior may be inappropriate
16200	to the onlooker since it represents a lapse from the expected, a
16300	contravention of the human order. It may even appear this way to the
16400	patient as a spectator of himself. But sometimes, as in the paranoid mode,
16500	the patient's behavior does not appear anomalous to himself. He maintains
16600	that anyone who understands his point of view, who conceptualizes
16700	situations as he does from the inside, would consider his outer behavior
16800	appropriate and justified. What he does not understand or accept is
16900	that his inner conceptualization is mistaken and represents a misinterpretation
17000	of the events of his experience.
17100	    The model to be presented in the sequel constitutes an attempt to
17200	explain some regularities and particular occurrences of conversational
17300	paranoid phenomena observable in the clinical situation of a psychiatric
17400	interview. The explanation is at the symbol-processing level of
17500	linguistically communicating agents and is cast in the form of a dialogue
17600	algorithm. Like all explanations it is only partially accurate and incomplete,
17700	and it does not claim to represent the only organization of mechanisms
17800	possible.
17900	
18000	                 ALGORITHMS
18100	
18200	   Theories can be presented in various forms such as natural language
18300	assertions, mathematical equations and computer programs. To date most
18400	theoretical explanations in psychiatry and psychology have consisted
18500	of natural language essays, with all their well-known vagueness and
18600	ambiguities. Many of these formulations have been untestable, not because
18800	relevant observations were lacking but because it was unclear what
18900	the essay was really saying. Clarity is needed.
19000	     An alternative way of formulating psychological theories is now
19100	available in the form of an algorithm, a computer program, which has
19200	the virtue of being clear and explicit in its articulation and which
19300	can be run on a computer to test its internal consistency and coherence.  
19400	Since we do not know the real mechanisms at, say, a perceptible molecular
19500	level, we construct a theoretical model which represents a partial
19600	paramorphic analogue. (See Harre, 1972). The analogy is at the symbol-
19700	processing level, not at the hardware level. A functional, computational
19800	or procedural equivalence is being postulated. The question then becomes
19900	one of determining the degree of the equivalence. Weak functional equivalence
20000	consists of indistinguishability at the outermost input-output level.
20100	Strong equivalence means correspondence at each inner I/O level, that is
20200	there exists a match not only between what is being done but how it is
20300	being done at a given level of operations. (These points will be discussed
20310	in greater detail in Chapter 3.)
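The weak-strong distinction can be illustrated with a small hypothetical sketch, again in present-day Python and unrelated to the paranoid model itself: two procedures that agree completely at the outermost input-output level, while going about their inner work quite differently, are weakly but not strongly equivalent.

```python
# Two procedures computing the same outer I/O function (sorting) by
# different inner means: weakly equivalent, not strongly equivalent.
def sort_by_insertion(xs):
    out = []
    for x in xs:                       # inner method: repeated insertion
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_merging(xs):
    if len(xs) <= 1:                   # inner method: recursive merging
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
assert sort_by_insertion(data) == sort_by_merging(data)  # same outer I/O
```

An observer restricted to the outermost level cannot tell the two apart; only by matching the correspondence at each inner I/O level could strong equivalence be claimed.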
20400	   An algorithm represents an organization of procedures or functions
20500	which constitutes an `effective procedure'. It is essential to grasp this concept.
20600	An effective procedure consists of two ingredients:
20700	     (1) A programming language in which procedural rules of behavior
20800	         can be rigorously and unambiguously specified.
20900	     (2) A machine processor which can rapidly and reliably carry out
21000	         the processes specified by the procedural rules.
21100	The specification of (1), written in a formally defined programming
21200	language, is termed an algorithm or program, while (2) involves a computer
21300	as the machine processor, a set of deterministic physical mechanisms
21400	which can perform the operations specified in the algorithm. The
21500	algorithm is called `effective' because it actually works, performing
21600	as intended when run on the machine processor.
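A miniature, purely illustrative instance of the two ingredients can be sketched as follows (both the patterns and the replies are invented for the example; the real model's rules are far more elaborate): the rule table stands in for ingredient (1), a rigorous and unambiguous specification of behavior, and the small interpreter stands in for ingredient (2), a processor that reliably carries the rules out.

```python
# Ingredient (1): procedural rules of behavior, stated unambiguously as data.
rules = [
    ("HELLO", "HOW DO YOU DO."),
    ("POLICE", "COPS CHECK UP ON PEOPLE."),
]

# Ingredient (2): a processor which reliably carries out the rules,
# scanning them in order and applying the first one that matches.
def process(utterance, rules):
    for pattern, response in rules:
        if pattern in utterance.upper():
            return response
    return "GO ON."

print(process("hello there", rules))  # prints: HOW DO YOU DO.
```

The pair is `effective' in exactly the sense above: the rules say without ambiguity what is to be done, and the processor actually does it.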
21700	     It is worth re-emphasizing that a simulation model postulates
21800	procedures analogous to the real and unknown procedures. The analogy being 
21900	drawn here is between specified processes and their generating systems.
22000	Thus
22100	
22200	        mental process                 computational process
22300	    ----------------------  ::  --------------------------------
22400	     brain hardware and            computer hardware and
22500	         programs                       programs
22600	The analogy is not simply between computer hardware and brain wetware.
22700	We are not comparing the structure of neurons with the structure of
22800	transistors; we are comparing the organization of symbol-processing
22900	procedures in an algorithm with symbol-processing procedures of the
23000	mind-brain. The central nervous system contains a representation of
23010	the experience of its holder. A model builder has a conceptual representation
23020	of that representation which he demonstrates in the form of an algorithm.
23030	Thus an algorithm is a demonstration of a representation of a representation.
23100	    When an algorithm runs on a computer the postulated explanatory
23200	structure becomes actualized, not merely described. (To describe the model
23300	is to present, among other things, its embodied theory.) A simulation model such as the
23400	one presented here can be interacted with by a person at the linguistic
23500	level as a communicating object in the world. Its communicative behavior
23600	can be experienced in a concrete form by a human observer-actor.
23700	Thus it can be known by acquaintance, by first-hand knowledge, as well
23800	as by the second-hand knowledge of description.
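What knowledge by acquaintance amounts to in practice is sitting at a terminal and conversing with the running model. The following sketch of such an interview loop is hypothetical (the actual program's conventions differ, and the stub stands in for the full algorithm), but it conveys the character of the encounter.

```python
class StubModel:
    """Stands in for the full model; always gives one noncommittal reply."""
    def respond(self, utterance):
        return "I don't care to discuss it."

def interview(model, get_line, put_line):
    # Each pass of the loop is one exchange which the human participant
    # experiences directly; saying `goodbye' ends the session.
    while True:
        utterance = get_line()
        if utterance.strip().upper() == "GOODBYE":
            put_line("PT.- GOODBYE.")
            return
        put_line("PT.- " + model.respond(utterance))
```

Supplying `get_line` and `put_line` from outside lets the same loop serve a live interview at a terminal or a recorded, scripted one.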
23900	   Since the algorithm is written in a programming language, it can be
24000	read directly only by a few people, most of whom do not enjoy reading
24100	other people's code. Hence the intelligibility requirement for explanations
24200	must be met in other ways. In an attempt to open the model to scrutiny
24300	I shall describe the model in detail using diagrams and interview
24400	examples profusely.